
chore(deps): update dependency litellm to v1.61.15 [security] #201

Open

renovate[bot] wants to merge 1 commit into develop from renovate/pypi-litellm-vulnerability

Conversation

renovate bot (Contributor) commented Mar 20, 2025

ℹ️ Note

This PR body was truncated due to platform limits.

This PR contains the following updates:

Package | Change
litellm | ==1.44.8 → ==1.61.15

GitHub Vulnerability Alerts

CVE-2024-10188

A vulnerability in BerriAI/litellm, as of commit 26c03c9, allows unauthenticated users to cause a Denial of Service (DoS) by exploiting the use of ast.literal_eval to parse user input. This function is not safe and is prone to DoS attacks, which can crash the litellm Python server.
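
For illustration, a minimal sketch of the parsing pitfall (hypothetical code, not litellm's actual handler): deeply nested literals drive CPython's parser into deep recursion, so a small request can take down the whole process, whereas json.loads rejects the same payload with a catchable error.

```
import json

# Hypothetical illustration of the pitfall; not litellm's actual handler code.
payload = "[" * 10_000 + "]" * 10_000  # tiny request body, pathological nesting

# import ast; ast.literal_eval(payload)
# Depending on the CPython version, the line above raises RecursionError or
# MemoryError, or overflows the C stack and crashes the interpreter outright.

# json.loads enforces a recursion check and fails with a catchable error:
try:
    json.loads(payload)
except (ValueError, RecursionError) as exc:
    print(f"rejected safely: {type(exc).__name__}")
```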

CVE-2025-0628

An improper authorization vulnerability exists in the main-latest version of BerriAI/litellm. When a user with the role 'internal_user_viewer' logs into the application, they are provided with an overly privileged API key. This key can be used to access all the admin functionality of the application, including endpoints such as '/users/list' and '/users/get_users'. This vulnerability allows for privilege escalation within the application, enabling any account to become a PROXY ADMIN.
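
A hedged sketch of the remediation pattern (hypothetical role and scope names, not litellm's implementation): derive the session key's scopes from the caller's role with a default-deny fallback, rather than issuing one admin-capable key for every login.

```
# Hypothetical role-to-scope mapping; all names here are illustrative only.
ROLE_SCOPES: dict[str, set[str]] = {
    "proxy_admin": {"users:list", "users:get", "keys:manage"},
    "internal_user_viewer": {"spend:read"},  # read-only, no user administration
}

def scopes_for(role: str) -> set[str]:
    # Default-deny: an unknown role gets no scopes, never admin ones.
    return ROLE_SCOPES.get(role, set())

# A viewer session must not be able to reach admin endpoints:
assert "users:list" not in scopes_for("internal_user_viewer")
```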

CVE-2024-9606

In berriai/litellm before version 1.44.12, the litellm/litellm_core_utils/litellm_logging.py file contains a vulnerability where the API key masking code only masks the first 5 characters of the key. This results in the leakage of almost the entire API key in the logs, exposing a significant amount of the secret key. The issue affects version v1.44.9.
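
The flaw is easy to picture with a sketch (hypothetical code, not the actual litellm_logging.py implementation): masking only the first 5 characters leaves most of the secret in the logs, whereas the conventional approach keeps only a short suffix visible.

```
def mask_first_five(key: str) -> str:
    # Flawed pattern: only the first 5 characters are hidden,
    # so nearly the whole secret still lands in the logs.
    return "*" * 5 + key[5:]

def mask_all_but_last_four(key: str) -> str:
    # Conventional fix: reveal at most a 4-character suffix.
    if len(key) <= 4:
        return "*" * len(key)
    return "*" * (len(key) - 4) + key[-4:]

key = "sk-abcdef1234567890"
print(mask_first_five(key))         # *****cdef1234567890  (leaky)
print(mask_all_but_last_four(key))  # ***************7890
```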

CVE-2024-8984

A Denial of Service (DoS) vulnerability exists in berriai/litellm version v1.44.5. This vulnerability can be exploited by appending characters, such as dashes (-), to the end of a multipart boundary in an HTTP request. The server continuously processes each character, leading to excessive resource consumption and rendering the service unavailable. The issue is unauthenticated and does not require any user interaction, impacting all users of the service.
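
A hedged mitigation sketch (hypothetical, not litellm's actual fix): RFC 2046 caps multipart boundaries at 70 characters, so rejecting oversized boundaries before parsing bounds the work an attacker can force.

```
MAX_BOUNDARY_LEN = 70  # RFC 2046, section 5.1.1

def validate_boundary(content_type: str) -> None:
    """Reject oversized multipart boundaries before any parsing begins."""
    for param in content_type.split(";")[1:]:
        name, _, value = param.strip().partition("=")
        if name.lower() == "boundary" and len(value.strip('"')) > MAX_BOUNDARY_LEN:
            raise ValueError("multipart boundary exceeds RFC 2046 limit")

# A boundary padded with trailing dashes, as in the reported attack:
try:
    validate_boundary("multipart/form-data; boundary=abc" + "-" * 5000)
except ValueError as exc:
    print(f"rejected: {exc}")
```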


Release Notes

BerriAI/litellm (litellm)

v1.61.7

What's Changed

New Contributors

Full Changelog: BerriAI/litellm@v1.61.3...v1.61.7

Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.7
```

Don't want to maintain your internal proxy? get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
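
Once the container is up, any OpenAI-compatible client can target the proxy. A minimal usage sketch with the openai Python SDK (v1+ assumed; the API key and model name below are placeholders that must match the proxy's configuration):

```
from openai import OpenAI

# Placeholders: key and model must match what the proxy is configured with.
client = OpenAI(base_url="http://localhost:4000", api_key="sk-placeholder")

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```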

Load Test LiteLLM Proxy Results

Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms)
/chat/completions | Failed ❌ | 180.0 | 206.98769618433857 | 6.145029010811349 | 6.145029010811349 | 1839 | 1839 | 146.21495699998377 | 3174.8161250000067
Aggregated | Failed ❌ | 180.0 | 206.98769618433857 | 6.145029010811349 | 6.145029010811349 | 1839 | 1839 | 146.21495699998377 | 3174.8161250000067

v1.61.3

What's Changed

Full Changelog: BerriAI/litellm@v1.61.1...v1.61.3

Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.3
```

Don't want to maintain your internal proxy? get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms)
/chat/completions | Failed ❌ | 110.0 | 127.51554087063036 | 6.408067444109619 | 6.408067444109619 | 1917 | 1917 | 94.95955199997752 | 2825.282969
Aggregated | Failed ❌ | 110.0 | 127.51554087063036 | 6.408067444109619 | 6.408067444109619 | 1917 | 1917 | 94.95955199997752 | 2825.282969

v1.61.1

Compare Source

What's Changed

Full Changelog: BerriAI/litellm@v1.61.0...v1.61.1

Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.1
```

Don't want to maintain your internal proxy? get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms)
/chat/completions | Passed ✅ | 160.0 | 180.272351294557 | 6.268555221678184 | 0.0 | 1874 | 0 | 118.979319999994 | 3618.562145999988
Aggregated | Passed ✅ | 160.0 | 180.272351294557 | 6.268555221678184 | 0.0 | 1874 | 0 | 118.979319999994 | 3618.562145999988

v1.61.0

What's Changed

New Contributors

Full Changelog: BerriAI/litellm@v1.60.8...v1.61.0

Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.0
```

Don't want to maintain your internal proxy? get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms)
/chat/completions | Passed ✅ | 180.0 | 213.86169773089247 | 6.297834462789351 | 0.003342799608699231 | 1884 | 1 | 81.07622899996159 | 4173.802059999957
Aggregated | Passed ✅ | 180.0 | 213.86169773089247 | 6.297834462789351 | 0.003342799608699231 | 1884 | 1 | 81.07622899996159 | 4173.802059999957

v1.60.8

What's Changed

Full Changelog: BerriAI/litellm@v1.60.6...v1.60.8

Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.60.8
```

Don't want to maintain your internal proxy? get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms)
/chat/completions | Passed ✅ | 170.0 | 189.56173781509457 | 6.206468643400922 | 0.0 | 1855 | 0 | 149.30551800000558 | 3488.08786699999
Aggregated | Passed ✅ | 170.0 | 189.56173781509457 | 6.206468643400922 | 0.0 | 1855 | 0 | 149.30551800000558 | 3488.08786699999

v1.60.6

Compare Source

What's Changed

New Contributors

Full Changelog: BerriAI/litellm@v1.60.5...v1.60.6

Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.60.6
```

Don't want to maintain your internal proxy? get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms)
/chat/completions | Passed ✅ | 200.0 | 217.05167674521235 | 6.288425886864887 | 0.0 | 1880 | 0 | 164.17646499996863 | 2306.284880000021
Aggregated | Passed ✅ | 200.0 | 217.05167674521235 | 6.288425886864887 | 0.0 | 1880 | 0 | 164.17646499996863 | 2306.284880000021

v1.60.5

Compare Source

What's Changed

New Contributors

Full Changelog: BerriAI/litellm@v1.60.4...v1.60.5

Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.60.5
```

Don't want to maintain your internal proxy? get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms)
/chat/completions | Passed ✅ | 210.0 | 251.44053604962153 | 6.19421782055854 | 0.0 | 1854 | 0 | 167.35073600000305 | 4496.06190000003
Aggregated | Passed ✅ | 210.0 | 251.44053604962153 | 6.19421782055854 | 0.0 | 1854 | 0 | 167.35073600000305 | 4496.06190000003

v1.60.4

Compare Source

What's Changed

Full Changelog: BerriAI/litellm@v1.60.2...v1.60.4

Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.60.4
```

Don't want to maintain your internal proxy? get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms)
/chat/completions | Passed ✅ | 210.0 | 243.98647747354212 | 6.187158959524932 | 0.0033407985742575225 | 1852 | 1 | 94.81396500007122 | 3976.009301999966
Aggregated | Passed ✅ | 210.0 | 243.98647747354212 | 6.187158959524932 | 0.0033407985742575225 | 1852 | 1 | 94.81396500007122 | 3976.009301999966

v1.60.2

Compare Source

What's Changed

New Contributors

Full Changelog: BerriAI/litellm@v1.60.0...v1.60.2

Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.60.2
```

Don't want to maintain your internal proxy? get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms)
/chat/completions | Passed ✅ | 170.0 | 187.78487681207412 | 6.365583292626693 | 0.0 | 1905 | 0 | 135.5453470000043 | 3644.0179759999864
Aggregated | Passed ✅ | 170.0 | 187.78487681207412 | 6.365583292626693 | 0.0 | 1905 | 0 | 135.5453470000043 | 3644.0179759999864

v1.60.0

What's Changed

Important Changes between v1.50.xx and v1.60.0

Known Issues

🚨 Detected issue with Langfuse Logging when Langfuse credentials are stored in DB

New Contributors

Full Changelog: BerriAI/litellm@v1.59.10...v1.60.0

Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.60.0
```

Don't want to maintain your internal proxy? get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms)
/chat/completions | Passed ✅ | 240.0 | 281.07272626532927 | 6.158354312051399 | 0.0 | 1843 | 0 | 215.79772499995897 | 3928.489000000013
Aggregated | Passed ✅ | 240.0 | 281.07272626532927 | 6.158354312051399 | 0.0 | 1843 | 0 | 215.79772499995897 | 3928.489000000013

v1.59.10

Compare Source

What's Changed

Full Changelog: BerriAI/litellm@v1.59.9...v1.59.10

Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.59.10
```

Don't want to maintain your internal proxy? get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms)
/chat/completions | Passed ✅ | 210.0 | 239.24647793068146 | 6.21745665443628 | 0.00334092243655899 | 1861 | 1 | 73.25327600000264 | 3903.3159660000083
Aggregated | Passed ✅ | 210.0 | 239.24647793068146 | 6.21745665443628 | 0.00334092243655899 | 1861 | 1 | 73.25327600000264 | 3903.3159660000083

v1.59.9

Compare Source

What's Changed

New Contributors

  • [@aymeric-roucher](htt

Configuration

📅 Schedule: Branch creation - "" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.


vercel bot commented Mar 20, 2025

The latest updates on your projects. Learn more about Vercel for Git ↗︎

Name | Status | Updated (UTC)
remembear | ❌ Failed (Inspect) | Aug 10, 2025 2:39pm


deepsource-io bot commented Mar 20, 2025

Here's the code health analysis summary for commits 04f87b9..d798ec0. View details on DeepSource ↗.

Analysis Summary

Analyzer | Status | Link
Python | ✅ Success | View Check ↗
JavaScript | ✅ Success | View Check ↗

💡 If you’re a repository administrator, you can configure the quality gates from the settings.

renovate bot force-pushed the renovate/pypi-litellm-vulnerability branch from 5e217b6 to 4878e30 on March 20, 2025 at 23:12
renovate bot changed the title from "chore(deps): update dependency litellm to v1.53.1 [security]" to "chore(deps): update dependency litellm to v1.61.15 [security]" on Mar 20, 2025
renovate bot force-pushed the renovate/pypi-litellm-vulnerability branch from 4878e30 to d798ec0 on August 10, 2025 at 14:31

coderabbitai bot commented Aug 10, 2025

Important

Review skipped

Bot user detected.

To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.

Support

Need help? Join our Discord community for assistance with any issues or questions.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

